Section: New Results

Motion & Sound Synthesis

Animating objects in real time is essential to enable user interaction during motion design. Physically-based models, an excellent paradigm for generating the motions a human user would expect, tend to lack efficiency for complex shapes due to their reliance on low-level geometry (such as fine meshes). Our goal is therefore threefold: first, to develop efficient physically-based models and collision-processing methods for arbitrary passive objects, by decoupling deformations from the possibly complex geometric representation; second, to study the combination of animation models with responsive geometric shapes, enabling the animation of complex constrained shapes in real time; third, to start developing coarse-to-fine animation models for virtual creatures, towards easier authoring of character animation for our work on narrative design.

Interactive paper tearing

Figure 4. Interactive paper tearing [23]. The path of a tear follows a geometric curve but also exhibits stochastic details.
IMG/paper_tearing.png

In this work, we proposed an efficient method for modeling paper tearing in the context of interactive modeling [23]; this is illustrated in Figure 4. The method uses geometric information to automatically detect potential starting points of tears. We further introduced a new hybrid geometric and physically-based method that computes the trajectory of tears while procedurally synthesizing high-resolution details of the tearing path using a texture-based approach. The resulting tears are compared with real paper and with previous studies on the expected geometric paths of tearing paper.
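
The hybrid strategy, a coarse deterministic trajectory enriched with procedural stochastic detail, can be sketched in a few lines. The sketch below is only illustrative and is not the actual method of [23]: the function name, the straight-line base path, and the fractal-noise detail model are all our assumptions.

    import numpy as np

    def tear_path(start, direction, length, n=200, amp=0.01, octaves=4, seed=0):
        """Sample a tear trajectory as a smooth base curve (here simply a
        straight line along `direction`) plus high-frequency stochastic
        detail standing in for the paper's fibrous irregularities."""
        rng = np.random.default_rng(seed)
        t = np.linspace(0.0, length, n)
        base = start + np.outer(t, direction)             # coarse geometric path
        normal = np.array([-direction[1], direction[0]])  # offset direction
        detail = np.zeros(n)
        for o in range(octaves):                          # fBm-like 1-D noise
            k = 2 ** o
            coarse = rng.standard_normal(max(n // (8 * k), 2))
            detail += np.interp(np.linspace(0.0, 1.0, n),
                                np.linspace(0.0, 1.0, coarse.size), coarse) / k
        return base + np.outer(amp * detail, normal)      # jagged tear path

    # e.g. path = tear_path(np.zeros(2), np.array([1.0, 0.0]), length=0.2)

In the actual method the base trajectory comes from the hybrid geometric/physical model and the high-resolution detail from a texture, but the decomposition into a coarse path plus fine detail is the same.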

A Generative Audio-Visual Prosodic Model for Virtual Actors

Figure 5. A Generative Audio-Visual Prosodic Model for Virtual Actors [9]. The rows present corresponding frames extracted from (a) the video, (b) ground-truth animation, and (c) synthetic animation. From left to right, the images correspond to comforting, fascinated, thinking (male actor), fascinated, ironic, and scandalized (female actor) attitudes.
IMG/dramatic_attitudes.png

In this new work [9], we proposed a method for generating natural speech and facial animation in various attitudes, using neutral speech and animation as input; this is illustrated in Figure 5. Given a neutral sentence, we use its phonotactic information to predict prosodic feature contours, and the predicted rhythm is used to compute phoneme durations. The expressive speech is then synthesized with a vocoder driven by the neutral utterance and the predicted rhythm, energy, and voice pitch. The facial animation parameters are obtained by adding the warped neutral motion to the reconstructed and warped predicted motion contours.
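
A minimal sketch of the final combination step is given below, assuming each animation parameter is a 1-D track and that a single global duration ratio applies (the actual system predicts per-phoneme durations; the function names are ours):

    import numpy as np

    def time_warp(track, new_len):
        """Linearly resample a 1-D parameter track to `new_len` frames,
        e.g. to follow the predicted expressive timing."""
        old = np.linspace(0.0, 1.0, len(track))
        new = np.linspace(0.0, 1.0, new_len)
        return np.interp(new, old, track)

    def expressive_track(neutral, predicted_delta, duration_ratio):
        """Warp the neutral motion and the predicted expressive contour
        to the expressive timing, then add them."""
        new_len = int(round(len(neutral) * duration_ratio))
        return time_warp(neutral, new_len) + time_warp(predicted_delta, new_len)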

Which prosodic features contribute to the recognition of dramatic attitudes?

In this new work [10], we explored the capability of audiovisual prosodic features (such as fundamental frequency, head motion, or facial expressions) to discriminate among different dramatic attitudes. We extracted the audiovisual parameters from an acted corpus of attitudes and structured them as frame-, syllable-, and sentence-level features. Using Linear Discriminant Analysis classifiers, we showed that prosodic features yield a higher recognition rate at the sentence level. This finding is confirmed by the perceptual evaluation of audio and/or visual stimuli obtained from the recorded attitudes.
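
The level comparison can be reproduced in a few lines with off-the-shelf tools; the sketch below assumes one feature matrix per temporal level and is not the exact evaluation protocol of [10]:

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def attitude_recognition_rate(features, labels, folds=5):
        """Mean cross-validated recognition rate of dramatic attitudes
        from prosodic features pooled at one temporal level."""
        return cross_val_score(LinearDiscriminantAnalysis(),
                               features, labels, cv=folds).mean()

    # Hypothetical usage: X_frame, X_syllable, X_sentence hold the same
    # prosodic parameters (f0, head motion, expressions) pooled at the
    # three levels; y holds the attitude label of each sample.
    # for name, X in [("frame", X_frame), ("syllable", X_syllable),
    #                 ("sentence", X_sentence)]:
    #     print(name, attitude_recognition_rate(X, y))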